
Schools' safety tools are spying on kids -- even at home

FOX News

A new system called Scanary uses AI and radar to scan up to 25,000 people an hour. School is back in session, but here's something no one told you at orientation: Your kids may have more eyes on them than just their teachers'. Even if you don't have kids in school, you really need to know about this. A new study from UC San Diego uncovered what's really going on with those student safety tools schools buy. You know, the ones that are supposed to stop bullying, flag mental health struggles and prevent school shootings?


Roblox, Discord, OpenAI and Google found new child safety group

Engadget

Roblox, Discord, OpenAI and Google are launching a nonprofit organization called ROOST, or Robust Open Online Safety Tools, which hopes "to build scalable, interoperable safety infrastructure suited for the AI era." The organization plans on providing free, open-source safety tools to public and private organizations to use on their own platforms, with a special focus on child safety to start. The press release announcing ROOST specifically calls out plans to offer "tools to detect, review, and report child sexual abuse material (CSAM)." Partner companies are providing funding for these tools, and the technical expertise to build them, too. The operating theory of ROOST is that access to generative AI is rapidly changing the online landscape, making the need for "reliable and accessible safety infrastructure" all the more urgent.


The limitations of AI safety tools

#artificialintelligence

In 2019, OpenAI released Safety Gym, a suite of tools for developing AI models that respect certain "safety constraints." At the time, OpenAI claimed that Safety Gym could be used to compare the safety of algorithms and the extent to which those algorithms avoid making harmful mistakes while learning. Since then, Safety Gym has been used to measure the performance of proposed algorithms from OpenAI as well as from researchers at the University of California, Berkeley and the University of Toronto. But some experts question whether AI "safety tools" are as effective as their creators purport them to be -- or whether they make AI systems safer in any sense. "OpenAI's Safety Gym doesn't feel like 'ethics washing' so much as maybe wishful thinking," Mike Cook, an AI researcher at Queen Mary University of London, told VentureBeat via email.
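The "safety constraints" that benchmarks like Safety Gym target are typically expressed as a cost signal the agent receives alongside its reward, with the goal of keeping expected cost below a limit. One common way to enforce such a constraint is a Lagrangian penalty with a multiplier updated by dual ascent. The sketch below is a generic illustration of that idea, not Safety Gym's actual API; all names (`constrained_objective`, `update_lambda`, `cost_limit`) are hypothetical.

```python
# Sketch of a Lagrangian-style safety constraint: maximize reward while
# keeping per-episode "cost" (safety violations) under a fixed limit.
# Generic illustration only -- not Safety Gym's API.

def constrained_objective(reward, cost, lam, cost_limit):
    """Penalized objective the agent would maximize."""
    return reward - lam * (cost - cost_limit)

def update_lambda(lam, cost, cost_limit, lr=0.05):
    """Dual ascent on the multiplier: lambda grows while the agent
    exceeds the cost limit, and shrinks toward 0 once it complies."""
    return max(0.0, lam + lr * (cost - cost_limit))

# Toy loop: episode costs start above the limit, so the multiplier
# rises until violating the constraint stops paying off.
lam, cost_limit = 0.0, 25.0
for episode_cost in [40.0, 35.0, 30.0, 26.0, 24.0]:
    lam = update_lambda(lam, episode_cost, cost_limit)
print(round(lam, 3))  # → 1.5
```

The point of the moving multiplier, rather than a fixed penalty, is that the trade-off between reward and safety is learned: the agent is punished exactly as hard as needed to bring violations under the limit.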


AI safety tools can help mitigate bias in algorithms

#artificialintelligence

As AI proliferates, researchers are beginning to call for technologies that might foster trust in AI-powered systems. According to a survey conducted by KPMG across five countries -- the U.S., the U.K., Germany, Canada and Australia -- over a third of the general public is unwilling to place trust in AI systems in general. And in a report published by Pega, only 25% of consumers said they would trust a decision made by an AI system about, for example, their qualification for a bank loan.